Particle filter-based Gaussian process optimisation for parameter inference
We propose a novel method for maximum likelihood-based parameter inference in
nonlinear and/or non-Gaussian state space models. The method is an iterative
procedure with three steps. At each iteration a particle filter is used to
estimate the value of the log-likelihood function at the current parameter
iterate. Using these log-likelihood estimates, a surrogate objective function
is created by utilizing a Gaussian process model. Finally, we use a heuristic
procedure to obtain a revised parameter iterate, providing an automatic
trade-off between exploration and exploitation of the surrogate model. The
method is profiled on two state space models, showing good performance in terms
of both accuracy and computational cost.
Comment: Accepted for publication in the proceedings of the 19th World Congress
of the International Federation of Automatic Control (IFAC), Cape Town, South
Africa, August 2014. 6 pages, 4 figures.
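The three-step loop described above can be sketched as follows. This is a hypothetical, self-contained illustration, not the paper's implementation: a noisy toy objective (`noisy_loglik`) stands in for the particle-filter log-likelihood estimator, the surrogate is a minimal Gaussian-process regression with an RBF kernel, and an upper-confidence-bound rule stands in for the paper's heuristic exploration/exploitation trade-off. All function names and constants here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_loglik(theta):
    """Stand-in for a particle-filter estimate of the log-likelihood."""
    return -(theta - 1.5) ** 2 + 0.05 * rng.standard_normal()

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=0.05):
    """GP posterior mean and variance at test points Xs (prior variance 1)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    var = 1.0 - np.einsum('ij,ji->i', Ks.T, np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 1e-12)

grid = np.linspace(-2.0, 4.0, 200)
X = np.array([-1.0, 0.0, 3.0])            # initial parameter iterates
y = np.array([noisy_loglik(t) for t in X])

for _ in range(10):                       # the iterative procedure
    mu, var = gp_posterior(X, y, grid)    # step 2: surrogate objective
    ucb = mu + 2.0 * np.sqrt(var)         # step 3: explore/exploit trade-off
    theta_next = grid[np.argmax(ucb)]     # revised parameter iterate
    X = np.append(X, theta_next)
    y = np.append(y, noisy_loglik(theta_next))  # step 1: noisy evaluation

theta_hat = X[np.argmax(y)]               # best iterate found so far
```

With the fixed seed, the loop concentrates its evaluations near the toy optimum at 1.5 after an initial exploration phase.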
Particle Metropolis-Hastings using gradient and Hessian information
Particle Metropolis-Hastings (PMH) allows for Bayesian parameter inference in
nonlinear state space models by combining Markov chain Monte Carlo (MCMC) and
particle filtering. The latter is used to estimate the intractable likelihood.
In its original formulation, PMH makes use of a marginal MCMC proposal for the
parameters, typically a Gaussian random walk. However, this can lead to a poor
exploration of the parameter space and an inefficient use of the generated
particles.
We propose a number of alternative versions of PMH that incorporate gradient
and Hessian information about the posterior into the proposal. This information
is essentially obtained as a byproduct of the likelihood estimation. Indeed,
we show how to estimate the required information using a fixed-lag particle
smoother, with a computational cost growing linearly in the number of
particles. We conclude that the proposed methods can: (i) decrease the length
of the burn-in phase, (ii) increase the mixing of the Markov chain at the
stationary phase, and (iii) make the proposal distribution scale-invariant,
which simplifies tuning.
Comment: 27 pages, 5 figures, 2 tables. The final publication is available at
Springer via http://dx.doi.org/10.1007/s11222-014-9510-
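As a rough illustration of how gradient information enters the proposal, the sketch below uses a Langevin-type (first-order) proposal inside a standard Metropolis-Hastings loop, with the asymmetric-proposal correction in the acceptance ratio. This is a minimal stand-in, not the paper's PMH algorithm: a toy Gaussian log-posterior with an exact gradient replaces the particle-filter likelihood estimate and the fixed-lag-smoother gradient estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    return -0.5 * (theta - 2.0) ** 2     # toy log-posterior: N(2, 1)

def grad_log_post(theta):
    return -(theta - 2.0)                # in PMH this would be a smoother estimate

def mala_mh(n_iters=5000, eps=0.8):
    theta = 0.0
    chain = np.empty(n_iters)
    for i in range(n_iters):
        # gradient-informed proposal: drift towards high-posterior regions
        mean_fwd = theta + 0.5 * eps ** 2 * grad_log_post(theta)
        prop = mean_fwd + eps * rng.standard_normal()
        mean_bwd = prop + 0.5 * eps ** 2 * grad_log_post(prop)
        # MH ratio including the forward/backward proposal densities
        log_alpha = (log_post(prop) - log_post(theta)
                     - 0.5 * (theta - mean_bwd) ** 2 / eps ** 2
                     + 0.5 * (prop - mean_fwd) ** 2 / eps ** 2)
        if np.log(rng.random()) < log_alpha:
            theta = prop
        chain[i] = theta
    return chain

chain = mala_mh()
```

The gradient term pulls proposals towards the posterior mode, which is what shortens the burn-in phase and improves mixing relative to a plain random walk.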
Quasi-Newton particle Metropolis-Hastings
Particle Metropolis-Hastings enables Bayesian parameter inference in general
nonlinear state space models (SSMs). However, in many implementations a random
walk proposal is used, and this can result in poor mixing unless it is carefully
tuned using tedious pilot runs. Therefore, we consider a new proposal inspired by
quasi-Newton algorithms that may achieve similar (or better) mixing with less
tuning. An advantage over other Hessian-based proposals is that it requires
only estimates of the gradient of the log-posterior. A possible application
is parameter inference in the challenging class of SSMs with intractable
likelihoods. We exemplify this application and the benefits of the new proposal
by modelling log-returns of futures contracts on coffee with a stochastic
volatility model with α-stable observations.
Comment: 23 pages, 5 figures. Accepted for the 17th IFAC Symposium on System
Identification (SYSID), Beijing, China, October 201
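The gradient-only idea can be illustrated with a secant-style curvature estimate, loosely in the spirit of quasi-Newton updates: differences of log-posterior gradients recover local curvature without any direct Hessian computation, and the proposal scale adapts accordingly. This is a hypothetical 1-D sketch on a toy Gaussian target with exact gradients; the paper itself uses noisy particle-smoother gradient estimates and a proper quasi-Newton update.

```python
import numpy as np

rng = np.random.default_rng(2)
SIGMA2 = 4.0                             # target variance (unknown to the sampler)

def grad_log_post(theta):
    return -theta / SIGMA2               # gradient of log N(0, SIGMA2)

def secant_curvature(theta_prev, theta_curr):
    """Estimate -d2/dtheta2 of the log-posterior from two gradients only."""
    dg = grad_log_post(theta_curr) - grad_log_post(theta_prev)
    dt = theta_curr - theta_prev
    return max(-dg / dt, 1e-6)           # damp to keep the estimate positive

def qn_mh(n_iters=5000):
    theta, theta_prev = 1.0, 0.5
    chain = np.empty(n_iters)
    for i in range(n_iters):
        h = secant_curvature(theta_prev, theta)
        scale = 1.0 / np.sqrt(h)         # proposal std adapts to local curvature
        prop = theta + scale * rng.standard_normal()
        # symmetric random-walk accept step (proposal mean is the current state)
        if np.log(rng.random()) < -0.5 * (prop ** 2 - theta ** 2) / SIGMA2:
            theta_prev, theta = theta, prop
        chain[i] = theta
    return chain

chain = qn_mh()
```

Because the proposal standard deviation is derived from the estimated curvature, the sampler needs no manual step-size tuning: for this Gaussian target the secant estimate recovers 1/SIGMA2 exactly, giving a proposal scale matched to the posterior width.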
Sequential Kernel Herding: Frank-Wolfe Optimization for Particle Filtering
Recently, the Frank-Wolfe optimization algorithm was suggested as a procedure
to obtain adaptive quadrature rules for integrals of functions in a reproducing
kernel Hilbert space (RKHS) with a potentially faster rate of convergence than
Monte Carlo integration (and "kernel herding" was shown to be a special case of
this procedure). In this paper, we propose to replace the random sampling step
in a particle filter by Frank-Wolfe optimization. By optimizing the position of
the particles, we can obtain better accuracy than random or quasi-Monte Carlo
sampling. In applications where the evaluation of the emission probabilities is
expensive (such as in robot localization), the additional computational cost to
generate the particles through optimization can be justified. Experiments on
standard synthetic examples as well as on a robot localization task indicate
an improvement in accuracy over random and quasi-Monte Carlo sampling.
Comment: In 18th International Conference on Artificial Intelligence and
Statistics (AISTATS), May 2015, San Diego, United States. JMLR Workshop and
Conference Proceedings, vol. 38.
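The kernel-herding special case mentioned above can be sketched in one dimension. This is a hypothetical illustration, not the paper's sequential particle-filter algorithm: particles are chosen greedily to match the target's kernel mean embedding rather than drawn at random. For a Gaussian kernel k(x,y) = exp(-(x-y)²/(2ℓ²)) and a N(0,1) target, the mean embedding has the closed form E[k(x,X)] = ℓ/√(ℓ²+1) · exp(-x²/(2(ℓ²+1))); the fixed candidate grid is an assumption of this sketch.

```python
import numpy as np

ELL = 0.5                                # kernel lengthscale (an assumption)

def kernel(x, y):
    return np.exp(-0.5 * (x - y) ** 2 / ELL ** 2)

def mean_embedding(x):
    """E_{X ~ N(0,1)}[k(x, X)], in closed form for the Gaussian kernel."""
    s2 = ELL ** 2 + 1.0
    return ELL / np.sqrt(s2) * np.exp(-0.5 * x ** 2 / s2)

def herd(n_particles, grid):
    """Greedy Frank-Wolfe / kernel-herding selection over a candidate grid."""
    particles = []
    for _ in range(n_particles):
        # score: <mu_p - mu_empirical, k(x, .)> in the RKHS, maximised greedily
        score = mean_embedding(grid)
        if particles:
            score = score - kernel(grid[:, None],
                                   np.array(particles)[None, :]).mean(axis=1)
        particles.append(grid[np.argmax(score)])
    return np.array(particles)

grid = np.linspace(-4.0, 4.0, 801)
particles = herd(20, grid)
```

Each new particle is placed where the gap between the target embedding and the empirical embedding is largest, so the point set spreads over the target distribution instead of clustering the way i.i.d. draws can.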
“Tax Simplification”—Grave Threat to the Charitable Contribution Deduction: The Problem and a Proposed Solution
The present National Administration has continued to support proposed legislative changes aimed at substantially reducing the number of income tax returns in which deductions are itemized. The author contends that these tax simplification proposals are incompatible with the preservation of the charitable contribution deduction and would undermine the position of voluntary charitable organizations by reducing the incentives for giving. He proposes a solution to this dilemma by promoting the charitable contribution deduction, with certain limitations, to the position of a deduction from gross income, rather than a deduction from adjusted gross income.